
    Strongly Secure and Efficient Data Shuffle On Hardware Enclaves

    Mitigating memory-access attacks on the Intel SGX architecture is an important and open research problem. A natural mitigation notion is cache-miss obliviousness, which requires that the cache misses emitted during an enclave execution be oblivious to sensitive data. This work realizes cache-miss obliviousness for data shuffling. The proposed approach is to engineer the oblivious Melbourne shuffle algorithm on the Intel SGX/TSX architecture, where Transactional Synchronization Extensions (TSX) are (ab)used to detect the occurrence of cache misses. In building the system, we propose software techniques that prefetch memory data prior to the TSX transaction to defend against physical bus-tapping attacks. Our evaluation on a real implementation shows that our system achieves better performance and a lower transaction abort rate than related work in the existing literature. Comment: Systex'1
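
    As a reading aid, the following is a minimal, single-process Python sketch of the Melbourne shuffle's structure (distribution into equal-sized, padded buckets followed by a cleanup pass), which is what makes its memory access pattern independent of the data. It only illustrates the algorithm's shape: the bucket-sizing constant (slack), the retry-on-overflow policy, and the use of plain Python lists are assumptions of this sketch, not the paper's SGX/TSX implementation, and nothing here models TSX or prefetching.

        import math, random

        def melbourne_shuffle(items, slack=4):
            # Toy Melbourne-shuffle-style oblivious shuffle: every chunk writes the SAME
            # number of (possibly dummy) slots to EVERY bucket, so the write pattern does
            # not depend on the data or on the secret permutation.
            n = len(items)
            b = math.isqrt(n) or 1                      # ~sqrt(n) buckets
            chunk = math.ceil(n / b)                    # input elements read per step
            slots = slack * math.ceil(chunk / b)        # fixed slots written per (chunk, bucket)
            perm = list(range(n))
            random.shuffle(perm)                        # secret target positions

            buckets = [[] for _ in range(b)]
            for c in range(b):                          # distribution phase
                part = [(perm[i], items[i]) for i in range(c * chunk, min((c + 1) * chunk, n))]
                for dest in range(b):
                    real = [(p, v) for p, v in part if p % b == dest]
                    if len(real) > slots:               # negligible probability for a proper slack
                        raise RuntimeError("bucket overflow; retry with a fresh permutation")
                    buckets[dest].extend(real + [(None, None)] * (slots - len(real)))  # pad with dummies

            out = [None] * n
            for bucket in buckets:                      # cleanup phase: drop dummies, place elements
                for p, v in bucket:
                    if p is not None:
                        out[p] = v
            return out

        print(melbourne_shuffle(list(range(16))))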

    Determining the structure of construction technology through a system of organizational and technological parameters of high-rise construction

    The use of a systematic assessment of the technical and economic parameters of building-construction technologies in design and construction practice for choosing resource-saving solutions is expedient and timely. The article examines the system of organizational and technological parameters of high-rise construction and determines the composition and structure of the construction technology

    Machine Learning Models that Remember Too Much

    Machine learning (ML) is becoming a commodity. Numerous ML frameworks and services are available to data holders who are not ML experts but want to train predictive models on their data. It is important that ML models trained on sensitive inputs (e.g., personal images or documents) not leak too much information about the training data. We consider a malicious ML provider who supplies model-training code to the data holder, does not observe the training, but then obtains white- or black-box access to the resulting model. In this setting, we design and implement practical algorithms, some of them very similar to standard ML techniques such as regularization and data augmentation, that "memorize" information about the training dataset in the model, yet keep the model as accurate and predictive as a conventionally trained one. We then explain how the adversary can extract memorized information from the model. We evaluate our techniques on standard ML tasks for image classification (CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20 Newsgroups and IMDB). In all cases, we show how our algorithms create models that have high predictive power yet allow accurate extraction of subsets of their training data
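
    To make the white-box setting concrete, here is a small, hypothetical numpy sketch of one memorization channel in this spirit: hiding bits of training data in the low-order mantissa bits of the model's weights, which perturbs each weight only slightly but lets a white-box adversary read the bits back. The 8-bits-per-weight budget and the array shapes are illustrative assumptions; this is not the paper's exact construction, nor its regularization- or augmentation-based variants.

        import math
        import numpy as np

        def encode_in_lsb(weights, secret_bits, bits_per_weight=8):
            # Overwrite the lowest mantissa bits of each float32 weight with secret bits.
            flat = weights.astype(np.float32).copy().ravel()
            ints = flat.view(np.uint32)                          # reinterpret the float bit patterns
            mask = np.uint32((1 << bits_per_weight) - 1)
            for i, start in enumerate(range(0, len(secret_bits), bits_per_weight)):
                chunk = "".join(map(str, secret_bits[start:start + bits_per_weight]))
                val = int(chunk.ljust(bits_per_weight, "0"), 2)
                ints[i] = (ints[i] & ~mask) | np.uint32(val)
            return ints.view(np.float32).reshape(weights.shape)

        def decode_from_lsb(weights, n_bits, bits_per_weight=8):
            # White-box extraction: read the low bits back out of the published weights.
            ints = weights.astype(np.float32).copy().ravel().view(np.uint32)
            bits = []
            for i in range(math.ceil(n_bits / bits_per_weight)):
                low = int(ints[i]) & ((1 << bits_per_weight) - 1)
                bits.extend(int(b) for b in format(low, f"0{bits_per_weight}b"))
            return bits[:n_bits]

        secret = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
        w = np.random.randn(4, 4).astype(np.float32)
        w_leaky = encode_in_lsb(w, secret)
        assert decode_from_lsb(w_leaky, len(secret)) == secret
        print("max weight perturbation:", float(np.abs(w_leaky - w).max()))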

    Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency

    We propose the first linear-space searchable encryption scheme with constant locality and \emph{sublogarithmic} read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $\Theta(\log N \log\log N)$ to $O(\log^{\gamma} N)$, where $\gamma = \frac{2}{3} + \delta$ for any fixed $\delta > 0$. Our scheme employs four different allocation algorithms for storing the keyword lists, depending on the size of the list considered each time. For our construction we develop (i) new probability bounds for the offline two-choice allocation problem, and (ii) a new I/O-efficient oblivious RAM with $\tilde{O}(n^{1/3})$ bandwidth overhead and zero failure probability, both of which may be of independent interest
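
    For readers unfamiliar with the two-choice allocation primitive the construction builds on, here is a minimal Python sketch of the classic online, greedy version: each keyword list gets two candidate bins from a hash and is placed in the currently lighter one, which keeps the maximum bin load small. The hash, the bin counts, and the greedy rule are assumptions for illustration only; the paper analyses an offline variant and proves new probability bounds for it.

        import hashlib

        def two_choice_place(keys, n_bins):
            bins = [[] for _ in range(n_bins)]
            for key in keys:
                digest = hashlib.sha256(key.encode()).digest()
                c1 = int.from_bytes(digest[:8], "big") % n_bins      # first candidate bin
                c2 = int.from_bytes(digest[8:16], "big") % n_bins    # second candidate bin
                dest = c1 if len(bins[c1]) <= len(bins[c2]) else c2  # place in the lighter bin
                bins[dest].append(key)
            return bins

        bins = two_choice_place([f"keyword-{i}" for i in range(10000)], 1000)
        print("max bin load:", max(len(b) for b in bins))   # stays close to the average load w.h.p.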

    Prochlo: Strong Privacy for Analytics in the Crowd

    The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture, Encode-Shuffle-Analyze (ESA), for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring. (Abstract continues; see the paper)
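
    The toy Python sketch below only illustrates the division of responsibilities in an Encode-Shuffle-Analyze pipeline: encoders strip user identity, a shuffler batches and randomly permutes reports to break linkage to senders, and the analyzer sees only the anonymous batch and applies a crowd threshold. The record format, threshold, and counting analysis are assumptions of this sketch, not Prochlo's actual mechanisms.

        import random
        from collections import Counter

        def encode(user_id, value):
            # Encoder: drop the user identifier, keep only the value to be analyzed.
            return {"value": value}

        def shuffle(batch):
            # Shuffler: collect a large batch and randomly permute it, so the analyzer
            # cannot link a report back to its sender by arrival order or timing.
            batch = list(batch)
            random.shuffle(batch)
            return batch

        def analyze(batch, min_count=3):
            # Analyzer: aggregate, and release only values reported by enough records.
            counts = Counter(item["value"] for item in batch)
            return {value: c for value, c in counts.items() if c >= min_count}

        reports = [(f"user{i}", random.choice(["crash_A", "crash_B", "crash_C"])) for i in range(50)]
        print(analyze(shuffle(encode(u, v) for u, v in reports)))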

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate: the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are {\em not} connected. In particular, the objective of the $k$-forest problem can be viewed as removing at most $m-k$ demands and finding a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon>0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands
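
    Read as a bicriteria guarantee, the last sentence can be written as follows. This is only a restatement of the abstract's claim in its own notation, where $\mathrm{OPT}_t$ denotes the cheapest subgraph allowed to leave at most $t$ of the $m$ demands unconnected:

        \[
          \bigl|\{\text{demands left unconnected by ALG}\}\bigr| \;\le\; m-k,
          \qquad
          \mathrm{cost}(\mathrm{ALG}) \;\le\; O\!\left(\tfrac{1}{\epsilon^{2}}\right)\cdot
          \mathrm{cost}\bigl(\mathrm{OPT}_{(1-\epsilon)(m-k)}\bigr).
        \]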

    Reducing chaos in SAT-like search: finding solutions close to a given one

    Motivated by our own industrial users, we attack the following challenge that is crucial in many practical planning, scheduling or timetabling applications. Assume that a solver has found a solution for a given hard problem and, due to unforeseen circumstances (e.g., rescheduling), or after an analysis by a committee, a few more constraints have to be added and the solver has to be re-run. Then it is almost always important that the new solution is “close” to the original one. The activity-based variable selection heuristics used by SAT solvers make search chaotic, i.e., extremely sensitive to the initial conditions. Therefore, re-running with just one additional clause added at the end of the input usually gives a completely different solution. We show that naive approaches for finding close solutions do not work at all, and that solving the corresponding Boolean optimization problem is far too inefficient: to find a reasonably close solution, state-of-the-art tools typically require much more time than was needed to solve the original problem. Here we propose the first (to our knowledge) approach that obtains close solutions quickly. In fact, it typically finds the optimal (i.e., closest) solution in only 25% of the time the solver took to solve the original problem. Our approach requires no deep theoretical or conceptual innovations. Still, it is non-trivial to come up with and will certainly be valuable for researchers and practitioners facing the same problem.
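
    The abstract does not spell out the proposed method, so the sketch below is not it; it only illustrates the general idea of biasing a SAT search toward a previous solution. It is a deliberately tiny, self-contained Python DPLL procedure (no clause learning, no activity heuristics) that, when branching, tries each variable's old value first, so the first model found tends to flip few variables. The clause representation and the example formula are assumptions for the demo.

        def dpll(clauses, assignment, prev):
            # clauses: list of lists of DIMACS-style literals; assignment/prev: sets of literals.
            clauses = [list(c) for c in clauses]
            unit = True
            while unit:                                  # unit propagation to a fixed point
                unit = False
                for c in clauses:
                    live = [l for l in c if -l not in assignment]
                    if not live:
                        return None                      # conflict
                    if any(l in assignment for l in live):
                        continue                         # clause already satisfied
                    if len(live) == 1:
                        assignment = assignment | {live[0]}
                        unit = True
            clauses = [c for c in clauses if not any(l in assignment for l in c)]
            if not clauses:
                return assignment                        # all clauses satisfied
            var = abs(next(l for c in clauses for l in c
                           if l not in assignment and -l not in assignment))
            first = var if var in prev else -var         # try the previous solution's value first
            for lit in (first, -first):
                result = dpll(clauses, assignment | {lit}, prev)
                if result is not None:
                    return result
            return None

        old_solution = {1, 2, 3}                               # model of the formula before the change
        new_clauses = [[1, 2], [2, 3], [1, 3], [-1, -2, -3]]   # formula after adding a constraint
        model = dpll(new_clauses, frozenset(), old_solution)
        print(sorted(model, key=abs),
              "variables flipped vs. old:", sum(1 for l in old_solution if -l in model))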

    Data Oblivious Genome Variants Search on Intel SGX

    We show how to build a practical, private, data-oblivious genome variants search using Intel SGX. More precisely, we consider the problem posed in Track 2 of the iDash Privacy and Security Workshop 2017 competition, which was to search for variants with a high $\chi^{2}$ statistic among certain genetic data over two populations. The winning solution of this iDash competition (developed by Carpov and Tortech) is extremely efficient, but not memory oblivious, which potentially makes it vulnerable to a whole host of memory- and cache-based side-channel attacks on SGX. In this paper, we adapt a framework in which we can exactly quantify this leakage. We provide a memory-oblivious implementation with reasonable information leakage at the cost of some efficiency. Our solution is roughly an order of magnitude slower than the non-memory-oblivious implementation, but still practical and much more efficient than naive memory-oblivious solutions: it solves the iDash problem in approximately 5 minutes. To achieve this, we develop novel definitions and models for oblivious dictionary merging, which may be of independent theoretical interest
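
    For context, the statistic being searched for can be sketched as follows: for each variant, form a 2x2 table of allele counts across the two populations, compute the chi-squared statistic, and keep the highest-scoring variants. The table layout, the toy counts, and the top-k selection below are assumptions for illustration; the paper's contribution is performing this aggregation memory-obliviously inside SGX (via oblivious dictionary merging), which this plain Python sketch does not attempt.

        def chi2_2x2(a, b, c, d):
            # a, b: allele / non-allele counts in population 1; c, d: the same in population 2.
            n = a + b + c + d
            denom = (a + b) * (c + d) * (a + c) * (b + d)
            return n * (a * d - b * c) ** 2 / denom if denom else 0.0

        variants = {                                  # variant id -> (a, b, c, d); toy counts
            "rs0001": (120, 80, 60, 140),
            "rs0002": (100, 100, 95, 105),
            "rs0003": (150, 50, 70, 130),
        }
        scores = {vid: chi2_2x2(*counts) for vid, counts in variants.items()}
        top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:2]
        print(top)                                    # the highest chi-squared variants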